
    A Basis for Interactive Schema Merging

    We present a technique for merging the schemas of heterogeneous databases that generalizes to several different data models, and show how it can be used in an interactive program that merges Entity-Relationship diagrams. Given a collection of schemas to be merged, the user asserts correspondences between entities and relationships in the various schemas by defining "isa" relations between them. These assertions are themselves treated as elementary schemas and are combined with the schemas being merged. Since the method defines the merge to be the join in an information ordering on schemas, it is a commutative and associative operation, which means that the merge is defined independently of the order in which the schemas are presented. We briefly describe a prototype interactive schema-merging tool built on these principles.
    Keywords: schemas, merging, semantic data models, entity-relationship data models, inheritance
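    To see why a join-based merge is order-independent, here is a minimal sketch in my own toy encoding, not the paper's actual formalism: if a schema is represented as a set of elementary assertions and the information ordering is set inclusion, the join is set union, which is commutative and associative. All entity names below are hypothetical.

```python
# Toy illustration (assumed encoding, not the paper's): schemas as sets of
# elementary assertions; user "isa" correspondences are themselves elementary
# schemas; the merge is the join (here, union), so presentation order is moot.
from functools import reduce

s1 = {("entity", "Person"), ("entity", "Employee"), ("isa", "Employee", "Person")}
s2 = {("entity", "Staff"), ("entity", "Manager"), ("isa", "Manager", "Staff")}
user = {("isa", "Staff", "Employee")}   # user-asserted cross-schema correspondence

def merge(*schemas):
    """Join in the inclusion ordering: plain union of assertion sets."""
    return reduce(set.union, schemas, set())

# Commutative and associative: any presentation order yields the same merge.
assert merge(s1, s2, user) == merge(user, s2, s1)
```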

    Towards a Declarative Query and Transformation Language for XML and Semistructured Data: Simulation Unification

    The growing importance of XML as a data interchange standard demands languages for data querying and transformation. Since the mid-1990s, several such languages have been proposed, inspired by functional languages (such as XSLT [1]) and/or database query languages (such as XQuery [2]). This paper addresses the application of logic programming concepts and techniques to the design of a declarative, rule-based query and transformation language for XML and semistructured data. The paper first introduces issues specific to XML and semistructured data, such as the need for flexible “query terms” and for “construct terms”. It then argues that logic programming concepts are particularly appropriate for a declarative query and transformation language for XML and semistructured data. Finally, a new form of unification, called “simulation unification”, is proposed for answering “query terms”, and it is illustrated with examples.
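    To make the notion of a flexible “query term” concrete, here is a toy matcher in the spirit of simulation unification, under my own simplifying assumptions (terms as label/children tuples, variables as "?"-prefixed strings); the actual language is considerably richer.

```python
# Sketch only: a query term matches a data term if labels agree and every
# query child is simulated by SOME data child (queries may be incomplete),
# binding variables along the way.
def simulate(query, data, bindings=None):
    bindings = dict(bindings or {})
    if isinstance(query, str):
        if query.startswith("?"):                      # query variable
            if query in bindings:
                return bindings if bindings[query] == data else None
            bindings[query] = data
            return bindings
        return bindings if query == data else None     # literal leaf
    if isinstance(data, str):                          # structure vs. leaf
        return None
    qlabel, qkids = query
    dlabel, dkids = data
    if qlabel != dlabel:
        return None
    for qk in qkids:                 # each query child needs some witness
        for dk in dkids:
            b = simulate(qk, dk, bindings)
            if b is not None:
                bindings = b
                break
        else:
            return None
    return bindings

doc = ("book", [("title", ["XML"]), ("author", ["Bob"]), ("year", ["2004"])])
q = ("book", [("author", ["?who"])])   # incomplete query term
print(simulate(q, doc))                # {'?who': 'Bob'}
```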

    Using schema transformation pathways for data lineage tracing

    With the increasing amount and diversity of information available on the Internet, there has been a huge growth in information systems that need to integrate data from distributed, heterogeneous data sources. Tracing the lineage of the integrated data is one of the problems being addressed in data warehousing research. This paper presents a data lineage tracing approach based on schema transformation pathways. Our approach is not limited to one specific data model or query language, and would be useful in any data transformation/integration framework based on sequences of primitive schema transformations.
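    A hedged sketch of the general idea (the concrete steps and inverse mappings below are hypothetical, not the paper's transformation language): if integrated data is produced by a pathway of primitive transformation steps, lineage can be traced by asking each step, in reverse order, which source items an output item was derived from.

```python
# Illustrative only: each pathway step carries a hypothetical "inverse"
# telling us which source items an output item came from.
pathway = [
    ("rename", lambda item: [item]),                              # 1-to-1 step
    ("union", lambda item: [("src_a", item), ("src_b", item)]),   # merge step
]

def trace_lineage(item, steps):
    """Walk the transformation pathway backwards, fanning out to sources."""
    frontier = [item]
    for name, inverse in reversed(steps):
        frontier = [src for it in frontier for src in inverse(it)]
    return frontier

print(trace_lineage("rome", pathway))   # [('src_a', 'rome'), ('src_b', 'rome')]
```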

    Dynamic Provenance for SPARQL Update

    While the Semantic Web currently can exhibit provenance information by using the W3C PROV standards, there is a "missing link" in connecting PROV to storing and querying for dynamic changes to RDF graphs using SPARQL. Solving this problem would be required for such clear use-cases as the creation of version control systems for RDF. While some provenance models and annotation techniques for storing and querying provenance data, originally developed with databases or workflows in mind, transfer readily to RDF and SPARQL, these techniques do not readily adapt to describing changes in dynamic RDF datasets over time. In this paper we explore how to adapt the dynamic copy-paste provenance model of Buneman et al. [2] to RDF datasets that change over time in response to SPARQL updates, how to represent the resulting provenance records themselves as RDF in a manner compatible with W3C PROV, and how the provenance information can be defined by reinterpreting SPARQL updates. The primary contribution of this paper is a semantic framework that enables the semantics of SPARQL Update to be used as the basis for a 'cut-and-paste' provenance model in a principled manner.
    Comment: Pre-publication version of ISWC 2014 paper
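    As a rough illustration of the setting (not the paper's formal semantics), the sketch below applies a SPARQL Update to an rdflib graph and records a PROV-style activity describing the change; the example.org namespace and the update identifier scheme are hypothetical.

```python
# Sketch: run a SPARQL Update, then record provenance as RDF using PROV terms.
from datetime import datetime, timezone
from rdflib import Graph, Literal, Namespace

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")       # hypothetical namespace

data, prov = Graph(), Graph()
data.update('INSERT DATA { <http://example.org/alice> '
            '<http://example.org/knows> <http://example.org/bob> }')

activity = EX["update-0001"]                # hypothetical identifier scheme
prov.add((activity, PROV.startedAtTime,
          Literal(datetime.now(timezone.utc).isoformat())))
prov.add((EX.alice, PROV.wasGeneratedBy, activity))

print(prov.serialize(format="turtle"))
```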

    Magnetic reconnection during collisionless, stressed, X-point collapse using Particle-in-Cell simulation

    Two cases of weakly and strongly stressed X-point collapse were considered, where the descriptors "weakly" and "strongly" refer to 20% and 124% unidirectional spatial compression of the X-point, respectively. In the weakly stressed case, the reconnection rate, defined as the out-of-plane electric field at the X-point (the magnetic null) normalised by the product of the external magnetic field and the Alfvén speed, peaks at 0.11, with its average over 1.25 Alfvén times being 0.04. The electron energy distribution in the current sheet shows, at the high-energy end of the spectrum, a power-law distribution with an index that varies in time, attaining a maximal value of -4.1 at the final simulation time step (1.25 Alfvén times). In the strongly stressed case, the reconnection peak occurs 3.4 times faster and reconnection is more efficient: the peak reconnection rate attains a value of 2.5, with the average reconnection rate over 1.25 Alfvén times being 0.5. The power-law energy spectrum of the electrons in the current sheet attains a steeper index of -5.5, close to the values observed in the vicinity of X-type regions in the Earth's magnetotail. Within about one Alfvén time, 2% and 20% of the initial magnetic energy is converted into heat and accelerated-particle energy in the cases of weak and strong stress, respectively. In both cases, during the peak of the reconnection, a quadrupole out-of-plane magnetic field is generated, possibly hinting at a Hall regime of reconnection. These results strongly suggest the importance of collisionless, stressed X-point collapse as a possible contributing factor to the solution of the solar coronal heating problem or, more generally, as an efficient mechanism for converting magnetic energy into heat and super-thermal particle energy.
    Comment: Final accepted version (Physics of Plasmas, in press, 2007)
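    For reference, the normalisation used for the reconnection rate quoted above can be written as follows (the symbols M, E_z and B_0 are my notation, chosen to match the verbal definition in the abstract):

```latex
% Reconnection rate: out-of-plane electric field at the X-point (the magnetic
% null), normalised by the product of the external field and the Alfven speed.
\[
  M = \frac{E_z(\mathbf{x}_{\mathrm{null}})}{B_0\, v_A},
  \qquad
  v_A = \frac{B_0}{\sqrt{\mu_0 \rho}}
\]
```

    With the quoted figures, M peaks at 0.11 (weak stress) and 2.5 (strong stress), with averages over 1.25 Alfvén times of 0.04 and 0.5, respectively.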

    Adventures in Invariant Theory

    We provide an introduction to enumerating and constructing invariants of group representations via character methods. The problem is contextualised via two case studies arising from our recent work: entanglement measures, for characterising the structure of state spaces for composite quantum systems; and Markov invariants, a robust alternative to parameter-estimation-intensive methods of statistical inference in molecular phylogenetics.
    Comment: 12 pp, includes supplementary discussion of examples
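    The basic character-method count behind such enumerations is the standard group-averaging formula (a textbook fact, not specific to this paper): for a finite group G acting on a space V with character χ,

```latex
% Dimension of the invariant subspace V^G: average the character over G.
\[
  \dim V^{G} = \frac{1}{|G|} \sum_{g \in G} \chi(g)
\]
% Invariants of degree d are counted by the same formula applied to the
% character of the symmetric power S^d(V).
```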

    Representing Partitions on Trees

    In evolutionary biology, biologists often face the problem of constructing a phylogenetic tree on a set X of species from a multiset Π of partitions corresponding to various attributes of these species. One approach used to solve this problem is to try instead to associate a tree (or even a network) to the multiset ΣΠ consisting of all those bipartitions {A, X − A} with A a part of some partition in Π. The rationale behind this approach is that a phylogenetic tree with leaf set X can be uniquely represented by the set of bipartitions of X induced by its edges. Motivated by these considerations, given a multiset Σ of bipartitions corresponding to a phylogenetic tree on X, in this paper we introduce and study the set P(Σ) consisting of those multisets of partitions Π of X with ΣΠ = Σ. More specifically, we characterize when P(Σ) is non-empty, and also identify elements of P(Σ) of maximum and minimum size. We also show that it is NP-complete to decide whether P(Σ) is non-empty when Σ is an arbitrary multiset of bipartitions of X. Ultimately, we hope that by gaining a better understanding of the mapping that takes an arbitrary partition system Π to the multiset ΣΠ, we will obtain new insights into the use of median networks and, more generally, split networks to visualize sets of partitions.
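    The mapping from Π to ΣΠ described above is straightforward to state in code; a minimal sketch follows (the toy species set and partitions are my own):

```python
# Each part A of each partition in Pi contributes the bipartition
# {A, X - A} to the multiset Sigma_Pi.
from collections import Counter

X = frozenset("abcde")                       # toy species set
Pi = [                                       # multiset of partitions of X
    [frozenset("ab"), frozenset("cde")],
    [frozenset("a"), frozenset("bc"), frozenset("de")],
]

def sigma(Pi, X):
    """Multiset of bipartitions {A, X - A} induced by the parts of Pi."""
    return Counter(
        frozenset({A, X - A}) for partition in Pi for A in partition
    )

for bipartition, mult in sigma(Pi, X).items():
    print([sorted(part) for part in bipartition], "x", mult)
```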

    What the Web Has Done for Scientific Data – and What It Hasn’t

    The web, together with database technology, has radically changed the way scientific research is conducted. Scientists now have access to an unprecedented quantity and range of data, and the speed and ease with which all forms of scientific data can be communicated have increased hugely. This change has come at a price: web and database technology no longer supports some of the desirable properties of paper publication, and it has introduced new problems for maintaining the scientific record. This brief paper examines some of these issues.

    WhiteHaul: an efficient spectrum aggregation system for low-cost and high capacity backhaul over white spaces

    We address the challenge of backhaul connectivity for rural and developing regions, which is essential for universal fixed/mobile Internet access. To this end, we propose to exploit the TV white space (TVWS) spectrum for its attractive properties: low cost, abundance in under-served regions and favorable propagation characteristics. Specifically, we propose a system called WhiteHaul for the efficient aggregation of TVWS spectrum, tailored to the backhaul use case. At the core of WhiteHaul are two key innovations: (i) a TVWS conversion substrate that can efficiently handle multiple non-contiguous chunks of TVWS spectrum using multiple low-cost 802.11n/ac cards but a single antenna; (ii) the novel use of MPTCP as a link-level tunnel abstraction for efficiently aggregating multiple chunks of TVWS spectrum via a novel uncoupled, cross-layer congestion control algorithm. Through extensive evaluations using a prototype implementation of WhiteHaul, we show that: (a) WhiteHaul can aggregate almost the whole of the TV band with 3 interfaces and achieve nearly 600 Mbps of TCP throughput; (b) the WhiteHaul MPTCP congestion control algorithm provides an order-of-magnitude improvement over state-of-the-art algorithms for typical TVWS backhaul links. We also present additional measurement- and simulation-based results evaluating other aspects of the WhiteHaul design.
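    Purely as an illustrative sketch (the chunk sizes, packing heuristic and spectral efficiency below are assumptions of mine, not WhiteHaul's actual algorithm or measurements): non-contiguous TVWS chunks are assigned to a small number of radio interfaces, whose individual rates an MPTCP tunnel can then pool.

```python
# Toy model: greedily pack spectrum chunks onto interfaces and estimate the
# aggregate rate an MPTCP tunnel could pool across them.
def assign_chunks(chunks_mhz, n_interfaces, max_mhz_per_iface=80):
    """Greedy largest-first packing of spectrum chunks onto interfaces."""
    loads = [0] * n_interfaces
    plan = [[] for _ in range(n_interfaces)]
    for chunk in sorted(chunks_mhz, reverse=True):
        i = min(range(n_interfaces), key=loads.__getitem__)
        if loads[i] + chunk <= max_mhz_per_iface:
            loads[i] += chunk
            plan[i].append(chunk)
    return plan

chunks = [24, 16, 40, 8, 32]          # hypothetical free TVWS chunks (MHz)
plan = assign_chunks(chunks, n_interfaces=3)
spectral_efficiency = 5.0             # bit/s/Hz, assumed
for i, p in enumerate(plan):
    print(f"iface {i}: {p} MHz -> ~{sum(p) * spectral_efficiency:.0f} Mbit/s")
print("aggregate ~", sum(sum(p) for p in plan) * spectral_efficiency, "Mbit/s")
```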

    WhiteHaul: white space spectrum aggregation system for backhaul

    Today almost half the world's population does not have Internet access. This is particularly the case in rural and underserved regions, where providing Internet access infrastructure is challenging and expensive. To this end, we present a demonstration of WhiteHaul [5], a low-cost hybrid cross-layer aggregation system for TV white space (TVWS) based backhaul. WhiteHaul features a custom-designed frequency conversion substrate that efficiently handles multiple non-contiguous chunks of TVWS spectrum using multiple low-cost COTS 802.11n/ac cards but a single antenna. At the software layer, WhiteHaul uses MPTCP as a link-level tunnel abstraction to efficiently aggregate multiple chunks of TVWS spectrum via a novel uncoupled, cross-layer congestion control algorithm. This demo illustrates the unique features of the WhiteHaul system based on a prototype implementation employing a modified version of the MPTCP Linux kernel and a custom-designed conversion substrate. Using this prototype, we highlight the performance of the WhiteHaul system under various configurations and network conditions.